Newton-type Methods
Abstract
Newton’s method is one of the most famous numerical methods. Its origins, as the name suggests, lie in part with Newton, but the form familiar to us today is due to Simpson of “Simpson’s Rule” fame. Originally proposed for finding the roots of polynomials, the method was extended by Simpson to general nonlinear equations and to optimization, where a minimizer is sought as a root of the gradient. Here we discuss Newton’s method for both of these problems, with the focus on the somewhat harder optimization problem. A disadvantage of Newton’s method in its original form is that it converges only locally unless the class of functions is severely restricted. Much of the discussion here concerns the many ways Newton’s method may be modified to achieve global convergence. A key aim of all these methods is that, once the iterates are sufficiently close to a solution, the method takes Newton steps.
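The two uses of Newton’s method described above, root-finding and optimization via a root of the gradient, together with the damping that globalized methods use to recover full Newton steps near a solution, can be sketched in the scalar case. This is a minimal illustration, not code from the article; the function names and the simple halving backtracking rule are assumptions:

```python
import math

def newton_root(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method for solving f(x) = 0 (scalar case)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / fprime(x)  # Newton update: x_{k+1} = x_k - f(x_k)/f'(x_k)
    return x

def damped_newton_min(g, gprime, gpp, x0, tol=1e-10, max_iter=100):
    """Damped Newton for minimizing g: shrink the step until g decreases.
    Near the solution the full Newton step (t = 1) is accepted, so the
    method eventually takes pure Newton steps."""
    x = x0
    for _ in range(max_iter):
        grad = gprime(x)
        if abs(grad) < tol:
            break
        step = -grad / gpp(x)  # Newton direction (assumes gpp(x) > 0)
        t = 1.0
        # Simple halving backtracking (a sketch; practical codes use an
        # Armijo-type sufficient-decrease condition instead).
        while g(x + t * step) > g(x) and t > 1e-8:
            t *= 0.5
        x += t * step
    return x

# Root-finding: x^2 - 2 = 0, converging to sqrt(2).
r = newton_root(lambda x: x * x - 2, lambda x: 2 * x, 1.0)

# Optimization: minimize the strictly convex g(x) = x^2 + exp(-x);
# the minimizer satisfies the stationarity condition 2x = exp(-x).
m = damped_newton_min(lambda x: x * x + math.exp(-x),
                      lambda x: 2 * x - math.exp(-x),
                      lambda x: 2 + math.exp(-x),
                      5.0)
```

Minimizing g by applying `newton_root` to its gradient gives the same iterates as the undamped optimization method; the damping is what supplies global convergence from remote starting points.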
Similar works
A New High Order Closed Newton-Cotes Trigonometrically-fitted Formulae for the Numerical Solution of the Schrodinger Equation
In this paper, we investigate the connection between closed Newton-Cotes formulae, trigonometrically-fitted methods, symplectic integrators and efficient integration of the Schrödinger equation. The literature on multistep symplectic integrators is sparse, although in recent decades several one-step symplectic integrators have been produced based on symplectic geometry (see the relevant lit...
Evaluation of estimation methods for parameters of the probability functions in tree diameter distribution modeling
One of the most commonly used statistical models for characterizing the variation of tree diameter at breast height is the Weibull distribution. The usual approach for estimating the parameters of a statistical model is maximum likelihood estimation (the likelihood method). Usually this relies on iterative algorithms such as Newton-Raphson. However, the efficiency of the likelihood method is not...
Proximal Newton-Type Methods for Minimizing Composite Functions
We generalize Newton-type methods for minimizing smooth functions to handle a sum of two convex functions: a smooth function and a nonsmooth function with a simple proximal mapping. We show that the resulting proximal Newton-type methods inherit the desirable convergence behavior of Newton-type methods for minimizing smooth functions, even when search directions are computed inexactly. Many pop...
Projected Newton-type Methods in Machine Learning
We consider projected Newton-type methods for solving large-scale optimization problems arising in machine learning and related fields. We first introduce an algorithmic framework for projected Newton-type methods by reviewing a canonical projected (quasi-)Newton method. This method, while conceptually pleasing, has a high computation cost per iteration. Thus, we discuss two variants that are m...
Error Bounds and Superlinear Convergence Analysis of Some Newton-type Methods in Optimization
We show that, for some Newton-type methods such as primal-dual interior-point path following methods and Chen-Mangasarian smoothing methods, local superlinear convergence can be shown without assuming the solutions are isolated. The analysis is based on local error bounds on the distance from the iterates to the solution set.
Convergence analysis of inexact proximal Newton-type methods
We study inexact proximal Newton-type methods to solve convex optimization problems in composite form: minimize_{x ∈ R^n} f(x) := g(x) + h(x), where g is convex and continuously differentiable and h : R^n → R is a convex but not necessarily differentiable function whose proximal mapping can be evaluated efficiently. Proximal Newton-type methods require the solution of subproblems to obtain the search ...
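The proximal Newton subproblem described in this excerpt, minimizing a local quadratic model of g plus the nonsmooth term h, can be illustrated in the simplest scalar setting with h(x) = λ|x|, where the subproblem has a closed-form soft-thresholding solution. This is a minimal sketch under those assumptions, not code from the paper:

```python
import math

def soft_threshold(v, lam):
    """Proximal mapping of lam * |x| (soft-thresholding)."""
    return math.copysign(max(abs(v) - lam, 0.0), v)

def prox_newton_scalar(gprime, gpp, lam, x0, tol=1e-10, max_iter=50):
    """Scalar proximal Newton for g(x) + lam*|x|.

    Each iteration minimizes the quadratic model
        g'(x)(z - x) + (H/2)(z - x)^2 + lam*|z|,  H = g''(x) > 0,
    whose exact solution is soft-thresholding of the scaled Newton
    point x - g'(x)/H at level lam/H."""
    x = x0
    for _ in range(max_iter):
        H = gpp(x)  # curvature of the quadratic model (assumed positive)
        x_new = soft_threshold(x - gprime(x) / H, lam / H)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: g(x) = (x - 3)^2 / 2 with lam = 1; the exact minimizer of
# g(x) + |x| is soft_threshold(3, 1) = 2, found in one step here.
x_star = prox_newton_scalar(lambda x: x - 3.0, lambda x: 1.0, 1.0, 0.0)
```

With the curvature replaced by H = 1 the same update reduces to a proximal gradient step; using the true second derivative is what makes it a (scalar) proximal Newton iteration.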
Publication date: 2010